91 research outputs found
Automatic inference of cross-modal connection topologies for X-CNNs
This paper introduces a way to learn cross-modal convolutional neural network
(X-CNN) architectures from a base convolutional network (CNN) and the training
data to reduce the design cost and enable applying cross-modal networks in
sparse data environments. Two approaches for building X-CNNs are presented. The
base approach learns the topology in a data-driven manner, by using
measurements performed on the base CNN and supplied data. The iterative
approach performs further optimisation of the topology through a combined
learning procedure, simultaneously learning the topology and training the
network. The approaches were evaluated agains examples of hand-designed X-CNNs
and their base variants, showing superior performance and, in some cases,
gaining an additional 9% of accuracy. From further considerations, we conclude
that the presented methodology takes less time than any manual approach would,
whilst also significantly reducing the design complexity. The application of
the methods is fully automated and implemented in Xsertion library
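The abstract describes the base approach only at a high level. As one hedged illustration of what "measurements performed on the base CNN and supplied data" could look like, the sketch below scores candidate cross-connections between modality streams by activation correlation; the function names, the correlation-based scoring rule, and the threshold are our assumptions, not the Xsertion implementation:

```python
import numpy as np

def connection_score(acts_a, acts_b):
    """Score a candidate cross-modal connection between two layers by the
    largest absolute channel-to-channel correlation of their
    (samples x channels) activations. Illustrative rule only."""
    za = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    zb = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    corr = za.T @ zb / len(acts_a)   # channel-pair correlation matrix
    return np.abs(corr).max()

def infer_topology(layer_acts, threshold=0.5):
    """Insert a cross-connection wherever the score clears the threshold."""
    names = list(layer_acts)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if connection_score(layer_acts[a], layer_acts[b]) > threshold]

rng = np.random.default_rng(0)
shared = rng.normal(size=(256, 8))           # signal common to two modalities
acts = {"rgb_block1": shared + 0.1 * rng.normal(size=(256, 8)),
        "depth_block1": shared + 0.1 * rng.normal(size=(256, 8)),
        "depth_block2": rng.normal(size=(256, 8))}
print(infer_topology(acts))   # only the correlated pair gets connected
```

The point of the toy is the shape of the procedure: topology is read off from statistics of the base network's activations rather than hand-designed.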
ChronoMID—Cross-modal neural networks for 3-D temporal medical imaging data
ChronoMID—neural networks for temporally-varying, hence Chrono, Medical Imaging Data—applies cross-modal convolutional neural networks (X-CNNs) to the medical domain for the first time. In this paper, we present multiple approaches for incorporating temporal information into X-CNNs and compare their performance in a case study on the classification of abnormal bone remodelling in mice. Previous work developing medical models has predominantly focused on either spatial or temporal aspects, but rarely both. Our models seek to unify these complementary sources of information and derive insights in a bottom-up, data-driven approach. As with many medical datasets, the case study herein exhibits deep rather than wide data; we apply various techniques, including extensive regularisation, to account for this. After training on a balanced set of approximately 70,000 images, two of the models—those using difference maps from known reference points—outperformed a state-of-the-art convolutional neural network baseline by over 30 percentage points (>99% vs. 68.26%) on an unseen, balanced validation set comprising around 20,000 images. These models are expected to perform well with sparse data sets, based both on previous findings with X-CNNs and on the representations of time used, which permit arbitrarily large and irregular gaps between data points. Our results highlight the importance of identifying a suitable description of time for a problem domain, as unsuitable descriptors may not only fail to improve a model, they may in fact confound it.
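A minimal sketch of the "difference maps from known reference points" idea, as we read it; the function and the toy 2-D scans are illustrative stand-ins, not the ChronoMID code (real inputs would be CT volumes):

```python
import numpy as np

def difference_maps(scans, reference_index=0):
    """Encode temporal change as per-pixel differences from a known
    reference scan. Each map depends only on the reference and one
    follow-up, so arbitrarily large and irregular gaps between time
    points are unproblematic."""
    reference = scans[reference_index].astype(np.float32)
    return [scan.astype(np.float32) - reference
            for i, scan in enumerate(scans) if i != reference_index]

baseline = np.zeros((4, 4))              # stand-in for a reference scan
week3 = np.full((4, 4), 2.0)             # uniform signal increase
week10 = np.full((4, 4), 5.0)            # larger change, irregular gap
maps = difference_maps([baseline, week3, week10])
print(maps[0].mean(), maps[1].mean())    # 2.0 5.0
```

Feeding such maps to the network, rather than raw time stamps, is one way a model can remain indifferent to the spacing of acquisitions.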
Scientific Workflows on Clouds with Heterogeneous and Preemptible Instances
Teaching sustainability as complex systems approach: a sustainable development goals workshop
Purpose
Approaches to solving sustainability problems require a specific problem-solving mode encompassing the complexity, fuzziness and interdisciplinary nature of the problem. This paper aims to promote a complex-systems view of addressing sustainability problems, in particular through the tool of network science, and provides an outline of an interdisciplinary training workshop.
Design/methodology/approach
The topic of the workshop is the analysis of the Sustainable Development Goals (SDGs) as a political action plan. The authors are interested in the synergies and trade-offs between the goals, which are investigated through the structure of the underlying network. The authors use a teaching approach aligned with sustainable education and transformative learning.
Findings
Methodologies from network science are experienced as valuable tools to familiarise students with complexity and to handle the proposed case study.
Originality/value
To the best of the authors’ knowledge, this is the first work to use network terminology and approaches to teach sustainability problems. It highlights the potential of network science in sustainability education and contributes accessible material.
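The synergy/trade-off analysis the workshop builds on can be introduced to students in a few lines of code. Below is an illustrative sketch in which SDG interactions are signed edges and node strength flags the most entangled goals; the specific edges and weights are classroom placeholders, not findings from the workshop:

```python
# Illustrative SDG interaction network: +1 marks a synergy, -1 a
# trade-off. The links below are invented classroom examples.
edges = [
    ("SDG7 Energy", "SDG13 Climate", +1),
    ("SDG8 Growth", "SDG13 Climate", -1),
    ("SDG4 Education", "SDG5 Gender", +1),
    ("SDG2 Food", "SDG15 Land", -1),
    ("SDG7 Energy", "SDG8 Growth", +1),
]

def node_strength(edges):
    """Sum of absolute edge weights per goal: a simple network-science
    measure of how entangled each goal is with the others."""
    strength = {}
    for a, b, w in edges:
        strength[a] = strength.get(a, 0) + abs(w)
        strength[b] = strength.get(b, 0) + abs(w)
    return strength

s = node_strength(edges)
print(max(s, key=s.get))   # the most interconnected goal in this toy net
```

Even this toy version lets students see why acting on one goal propagates through the network, which is the complex-systems intuition the workshop targets.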
The Impact of Heterogeneity and Awareness in Modeling Epidemic Spreading on Multiplex Networks.
In the real world, dynamic processes involving human beings are not disjoint. To capture the real complexity of such dynamics, we propose a novel model of the coevolution of epidemic and awareness spreading processes on a multiplex network, also introducing a preventive isolation strategy. Our aim is to evaluate and quantify the joint impact of heterogeneity and awareness under different socioeconomic conditions. Considering, as a case study, an emerging public health threat, the Zika virus, we introduce a data-driven analysis exploiting multiple sources and types of data, ranging from Big Five personality traits to Google Trends, related to different world countries with an ongoing epidemic outbreak. Our findings demonstrate how the proposed model allows the epidemic outbreak to be delayed and the resilience of nodes to be increased, especially under critical economic conditions. Simulation results, using a data-driven approach for the Zika virus, a topic of growing scientific interest, are consistent with the proposed analytic model. This work was partially supported by the following research grant: Italian Ministry of University and Research (MIUR), “Programma Operativo Nazionale Ricerca e Competitività 2007–2013”, within the project “PON-03PE-00132-1” - Servify.
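As a hedged illustration of the coupled dynamics (not the authors' model: the update rule, the parameters `beta` and `damp`, and the SI-style contagion without recovery are all simplifying assumptions), one step of awareness-modulated spreading on a two-layer network might look like:

```python
import random

def step(state, aware, physical, info, beta=0.3, damp=0.2, rng=None):
    """One discrete step of coupled awareness-epidemic dynamics on a
    two-layer (multiplex) network: awareness spreads on the information
    layer and scales the infection probability on the physical layer
    down by `damp` for aware nodes. SI-style contagion, no recovery."""
    rng = rng or random
    new_state, new_aware = dict(state), set(aware)
    for u, v in info:                       # awareness diffusion
        if state[u] == "I" or u in aware:
            new_aware.add(v)
        if state[v] == "I" or v in aware:
            new_aware.add(u)
    for u, v in physical:                   # awareness-damped contagion
        for src, dst in ((u, v), (v, u)):
            if state[src] == "I" and state[dst] == "S":
                p = beta * (damp if dst in aware else 1.0)
                if rng.random() < p:
                    new_state[dst] = "I"
    return new_state, new_aware

state = {i: "S" for i in range(5)}
state[0] = "I"
aware = set(range(5))                       # everyone already alerted
physical = [(i, i + 1) for i in range(4)]   # contact layer (a chain)
info = list(physical)                       # information layer
rng = random.Random(1)
for _ in range(10):
    state, aware = step(state, aware, physical, info, damp=0.0, rng=rng)
print(sum(v == "I" for v in state.values()))   # damp=0: outbreak contained
```

The paper's heterogeneity and socioeconomic factors would enter through node-specific `beta` and `damp` values; here a single global pair keeps the sketch minimal.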
Now you see me (CME): Concept-based model extraction
Deep Neural Networks (DNNs) have achieved remarkable performance on a range
of tasks. A key step to further empowering DNN-based approaches is improving
their explainability. In this work we present CME: a concept-based model
extraction framework, used for analysing DNN models via concept-based extracted
models. Using two case studies (dSprites, and Caltech UCSD Birds), we
demonstrate how CME can be used to (i) analyse the concept information learned
by a DNN model, (ii) analyse how a DNN uses this concept information when
predicting output labels, and (iii) identify key concept information that can
further improve DNN predictive performance (for one of the case studies, we
showed how model accuracy can be improved by over 14%, using only 30% of the
available concepts).
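To make the extraction idea concrete, here is a deliberately tiny sketch: hidden activations are binarised into concept indicators, and a transparent lookup table maps concepts to labels. The thresholding and majority-vote rule are our simplifications, not the CME algorithm, and the bird labels only echo the Caltech UCSD Birds case study:

```python
# Toy concept-based model extraction: (1) map a DNN's hidden activations
# to human-interpretable concepts, (2) fit a simple, inspectable model
# from concepts to labels.
def extract_concepts(hidden, thresholds):
    """Binarise hidden units into concept indicators."""
    return tuple(int(h > t) for h, t in zip(hidden, thresholds))

def fit_concept_to_label(concept_rows, labels):
    """Majority-vote lookup table: the 'extracted model'."""
    votes = {}
    for c, y in zip(concept_rows, labels):
        votes.setdefault(c, []).append(y)
    return {c: max(set(ys), key=ys.count) for c, ys in votes.items()}

hidden_acts = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.7), (0.2, 0.9)]
labels = ["bird_A", "bird_A", "bird_B", "bird_B"]
concepts = [extract_concepts(h, (0.5, 0.5)) for h in hidden_acts]
table = fit_concept_to_label(concepts, labels)
print(table[(1, 0)], table[(0, 1)])   # bird_A bird_B
```

Because the extracted model is a table over named concepts, inspecting it directly answers questions (i) and (ii) above: which concepts were learned and how they drive each label.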
Modelling trait-dependent speciation with approximate Bayesian computation
Phylogenetics is the field of modelling the discrete temporal dynamics of
speciation. Complex models can nowadays be studied using the Approximate
Bayesian Computation approach which avoids likelihood calculations. The field's
progression is hampered by the lack of robust software to estimate the numerous
parameters of the speciation process. In this work we present an R package,
pcmabc, based on Approximate Bayesian Computations, that implements three novel
phylogenetic algorithms for trait-dependent speciation modelling. Our
phylogenetic comparative methodology takes into account both the simulated
traits and phylogeny, attempting to estimate the parameters of the processes
generating the phenotype and the trait. The user is not restricted to a
predefined set of models and can specify a variety of evolutionary and
branching models. We illustrate the software with a simulation-reestimation
study focused around the branching Ornstein-Uhlenbeck process, where the
branching rate depends non-linearly on the value of the driving
Ornstein-Uhlenbeck process. Included in this work is a tutorial on how to use
the software.
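pcmabc itself is an R package; purely to illustrate the likelihood-free idea it builds on, the Python sketch below runs ABC rejection for the mean-reversion rate of a (non-branching) Ornstein-Uhlenbeck trait. The summary statistic, prior, and tolerance are arbitrary choices for the sketch:

```python
import random
import statistics

def simulate_ou(theta, n=200, dt=0.05, x0=0.0, sigma=1.0, rng=None):
    """Euler-Maruyama path of an Ornstein-Uhlenbeck trait,
    dX = -theta * X dt + sigma dW."""
    rng = rng or random
    x, xs = x0, []
    for _ in range(n):
        x += -theta * x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def abc_reject(observed_stat, prior, n_draws, tol, rng):
    """ABC rejection: keep parameter draws whose simulated summary
    statistic lands within `tol` of the observed one; no likelihood
    is ever evaluated."""
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        stat = statistics.variance(simulate_ou(theta, rng=rng))
        if abs(stat - observed_stat) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(42)
observed = statistics.variance(simulate_ou(2.0, rng=rng))  # pseudo-data
posterior = abc_reject(observed, lambda r: r.uniform(0.1, 5.0),
                       n_draws=300, tol=0.2, rng=rng)
print(len(posterior), "draws accepted")
```

The package additionally simulates the branching process jointly with the trait and accepts on tree-plus-trait summaries; this sketch keeps only the accept/reject core.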
From Infection to Immunity: Understanding the Response to SARS-CoV2 Through In-Silico Modeling.
BACKGROUND: The immune system condition of the patient is a key factor in COVID-19 infection survival. A growing number of studies have focused on immunological determinants to develop better biomarkers for therapies. AIM: Studies of the insurgence of immunity are at the core of both SARS-CoV-2 vaccine development and therapies. This paper attempts to describe the insurgence (and the span) of immunity in COVID-19 at the population level by developing an in-silico model. We simulate the immune response to SARS-CoV-2 and analyze the impact of infecting viral load, affinity to the ACE2 receptor, and age in an artificially infected population on the course of the disease. METHODS: We use a stochastic agent-based immune simulation platform to construct a virtual cohort of infected individuals with age-dependent varying degrees of immune competence. We use a parameter set to reproduce known inter-patient variability and general epidemiological statistics. RESULTS: By assuming the viremia at day 30 of the infection to be the proxy for lethality, we reproduce in-silico several clinical observations and identify critical factors in the statistical evolution of the infection. In particular, we evidence the importance of the humoral response over the cytotoxic response and find that the antibody titers measured after day 25 from the infection are a prognostic factor for determining the clinical outcome of the infection. Our modeling framework uses COVID-19 infection to demonstrate the actionable effectiveness of modeling the immune response at individual and population levels. The model developed can explain and interpret observed patterns of infection and makes verifiable temporal predictions. Within the limitations imposed by the simulated environment, this work proposes quantitatively that the great variability observed in patient outcomes in real life can be the mere result of subtle variability in the infecting viral load and immune competence in the population.
In this work, we exemplify how computational modeling of the immune response provides an important view to discuss hypotheses and design new experiments, in particular paving the way for further investigations into the duration of vaccine-elicited immunity, especially in view of the confounding effect of immunosenescence.
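The population-level claim in the last sentence can be illustrated with a toy virtual cohort. All rates and the age-competence relation below are invented for illustration; the paper uses a full agent-based immune simulator, not this two-parameter caricature:

```python
import random

def simulate_patient(age, viral_load, rng):
    """Toy virtual patient: immune competence declines with age
    (immunosenescence) and clears virus each day; viral load remaining
    at day 30 is the proxy for a poor outcome. All rates invented."""
    competence = max(0.2, 1.0 - age / 120) * rng.uniform(0.8, 1.2)
    v = viral_load
    for _day in range(30):
        v *= 1.4                                # daily viral replication
        v *= max(0.0, 1.0 - 0.4 * competence)   # daily immune clearance
    return v

def frac_bad(cohort):
    """Fraction of patients with residual viremia at day 30."""
    return sum(v > 1.0 for v in cohort) / len(cohort)

rng = random.Random(7)
young = [simulate_patient(25, 1.0, rng) for _ in range(200)]
old = [simulate_patient(80, 1.0, rng) for _ in range(200)]
print(frac_bad(young), frac_bad(old))
```

Even in this caricature, identical initial viral loads yield widely divergent day-30 outcomes within the younger cohort, driven only by small random differences in competence, which is the qualitative point the paper makes quantitatively.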
Improving QoE in multi-layer social sensing: A cognitive architecture and game theoretic model
This paper proposes a novel cognitive architecture and game-theoretic model for resource sharing among netizens, thus improving their quality of experience (QoE) in multi-layer social sensing environments. The underlying approach is to quantify micro-rewards and inequalities derived from social multi-layer interactions. Specifically, we model our society as a social multi-layer network of individuals or groups of individuals (nodes), where the layers represent multiple channels of interactions (on various services). The weighted edges correspond to the multiple social relationships between nodes participating in different services, reflecting the importance assigned to each of these edges, and are defined based on the concepts of awareness and homophily. Heterogeneity, both interactions-wise on the multiple layers and related to homophily between individuals, on each node and layer of a weighted multiplex network produces a complex multi-scale interplay between nodes in the multi-layer structure. Applying game theory, we quantify the impact of heterogeneity on the evolutionary dynamics of social sensing through a data-driven approach based on the propagation of individual-level micro-affirmations and micro-inequalities. The micro-packets of energy continuously exchanged between nodes may impact positively or negatively on their social behaviors, producing peaks of extreme dissatisfaction and, in some cases, a form of distress. Quantifying the evolutionary dynamics of human behaviors enables the detection of such peaks in the population and enables us to design a targeted control mechanism, where social rewards and self-healing help improve the QoE of the netizens.
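One way to picture the micro-affirmation/micro-inequality bookkeeping and the self-healing control step; the payoffs, layers, and distress threshold below are illustrative assumptions, not the paper's model:

```python
# (node, layer) -> micro-payoffs from interactions on that layer;
# +1 is a micro-affirmation, -1 a micro-inequality.
interactions = {
    ("alice", "work"): [+1, +1, -1],
    ("alice", "social"): [+1],
    ("bob", "work"): [-1, -1, -1],
    ("bob", "social"): [-1, +1],
}

def qoe_balance(interactions):
    """Aggregate each node's micro-payoffs across all layers."""
    balance = {}
    for (node, _layer), payoffs in interactions.items():
        balance[node] = balance.get(node, 0) + sum(payoffs)
    return balance

def targeted_rewards(balance, threshold=-2, boost=3):
    """Self-healing step: send a social reward to distressed nodes,
    i.e. those whose balance falls to the threshold or below."""
    return {n: b + (boost if b <= threshold else 0)
            for n, b in balance.items()}

before = qoe_balance(interactions)
after = targeted_rewards(before)
print(before, after)   # bob's distress is detected and compensated
```

The paper's game-theoretic machinery decides *how much* to reward and *when*; this sketch only shows the detect-then-compensate loop that the control mechanism closes.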